
    Application of data mining in scheduling of single machine system

    The rapidly growing field of data mining has the potential to improve the performance of existing scheduling systems. Such systems generate large amounts of data that are often not utilized to their potential. The question is whether the implicit knowledge behind scheduling practice can be discovered and then used to improve that practice. In this dissertation, we propose a novel methodology for generating scheduling rules using a data-driven approach. We show how to use data mining to discover previously unknown dispatching rules by applying learning algorithms directly to production data. We also consider how this new approach can yield unexpected knowledge and insights, in a manner that would not be possible if an explicit model of the system or the basic scheduling rules had to be obtained beforehand. However, direct data mining of production data can at least mimic existing scheduling practice; the open question is whether that practice can be improved with the knowledge discovered by data mining. We propose combining data mining with optimization for effective production scheduling. In this approach, we use a genetic algorithm to find a heuristic solution to the optimal instance-selection problem, and then induce a decision tree from this subset of instances. The optimal instance selection can be viewed as determining the best practices from what has been done in the past, and data mining can then learn new dispatching rules from those best practices.
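
    Below is a minimal, illustrative sketch of the combined approach described in the abstract: a genetic algorithm selects a subset of historical dispatching decisions, and a decision tree is then induced from that subset. The toy production log, the held-out-accuracy fitness, and all GA settings are assumptions made for the sketch; the dissertation's actual fitness would be scheduling performance, not classification accuracy.

    # Sketch: a GA selects a subset of historical dispatching decisions ("best practices"),
    # then a decision tree learns dispatching rules from that subset.
    # Assumptions: binary-mask chromosomes and held-out accuracy as a stand-in fitness.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Toy production log: job features and the dispatching decision recorded by the scheduler.
    X = rng.random((500, 3))
    y = (X[:, 1] < X[:, 0]).astype(int)          # stand-in for recorded decisions
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    def fitness(mask):
        """Train a tree on the selected instances and score it on held-out data."""
        if mask.sum() < 10:
            return 0.0
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        tree.fit(X_tr[mask.astype(bool)], y_tr[mask.astype(bool)])
        return tree.score(X_val, y_val)

    def genetic_instance_selection(n_pop=30, n_gen=40, p_mut=0.02):
        pop = rng.integers(0, 2, size=(n_pop, len(X_tr)))
        for _ in range(n_gen):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-n_pop // 2:]]          # truncation selection
            cut = rng.integers(1, len(X_tr), size=n_pop // 2)
            kids = np.array([np.r_[parents[i % len(parents)][:c],
                                   parents[(i + 1) % len(parents)][c:]]
                             for i, c in enumerate(cut)])            # one-point crossover
            kids ^= (rng.random(kids.shape) < p_mut)                 # bit-flip mutation
            pop = np.vstack([parents, kids])
        return pop[np.argmax([fitness(ind) for ind in pop])]

    best_mask = genetic_instance_selection()
    rule_tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    rule_tree.fit(X_tr[best_mask.astype(bool)], y_tr[best_mask.astype(bool)])
    print("held-out accuracy of induced dispatching rule:", rule_tree.score(X_val, y_val))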

    A Combined Fit on the Annihilation Corrections in B_{u,d,s} → PP Decays Within QCDF

    Motivated by the possible large annihilation contributions implied by recent CDF and LHCb measurements of nonleptonic annihilation B meson decays, and by the refined experimental measurements of hadronic B meson decays, we study the strength of annihilation contributions within QCD factorization (QCDF) in this paper. With the available measurements of two-body B_{u,d,s} -> pi pi, pi K, K K decays, a comprehensive fit of the phenomenological parameters X_A^{i,f} (or rho_A^{i,f} and phi_A^{i,f}), which are used to parameterize the endpoint singularity in annihilation amplitudes, is performed with the statistical chi^2 approach. It is found that (1) flavor symmetry breaking effects can hardly be distinguished between X_{A,s}^i and X_{A,d}^i due to the large experimental errors and theoretical uncertainties, where X_{A,s}^i and X_{A,d}^i are related to the nonfactorizable annihilation contributions in B_s and B_{u,d} decays, respectively; hence X_{A,s}^i = X_{A,d}^i is a good approximation for now. (2) In principle, the parameter X_{A}^f, which is related to the factorizable annihilation contributions and is independent of the initial state, can be regarded as the same variable for B_{u,d,s} decays. (3) Numerically, two solutions are found: one is (rho_A^i, phi_A^i) = (2.98^+1.12_-0.86, -105^+34_-24) and (rho_A^f, phi_A^f) = (1.18^+0.20_-0.23, -40^+11_-8); the other is (rho_A^i, phi_A^i) = (2.97^+1.19_-0.90, -105^+32_-24) and (rho_A^f, phi_A^f) = (2.80^+0.25_-0.21, 165^+4_-3). Clearly, the nonfactorizable annihilation parameter X_A^i is in general unequal to the factorizable annihilation parameter X_A^f, which differs from the traditional treatment. With the fitted parameters, all results for the observables of B_{u,d,s} -> pi pi, pi K, K K decays are in good agreement with experimental data. Comment: 12 pages, version accepted by PL
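
    For reference, the X_A^{i,f} above are conventionally defined through the standard QCDF parameterization of the endpoint-divergent annihilation integrals, and the fit minimizes an ordinary chi^2 over the measured observables. The LaTeX lines below sketch that standard convention (with Lambda_h of order 0.5 GeV); the paper's exact normalization may differ.

    X_A^{i,f} \;\equiv\; \int_0^1 \frac{dy}{y}
        \;\to\; \left(1 + \rho_A^{i,f}\, e^{\,i\phi_A^{i,f}}\right)\ln\frac{m_B}{\Lambda_h},
        \qquad \Lambda_h \simeq 0.5~\text{GeV},

    \chi^2(\rho_A,\phi_A) \;=\; \sum_k
        \frac{\bigl(\mathcal{O}_k^{\rm exp} - \mathcal{O}_k^{\rm QCDF}(\rho_A,\phi_A)\bigr)^2}{\sigma_k^{2}}.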

    DISC: Deep Image Saliency Computing via Progressive Representation Learning

    Salient object detection is receiving increasing attention as an important component or step in several pattern recognition and image processing tasks. Although a variety of powerful saliency models have been proposed, they usually involve heavy feature (or model) engineering based on priors (or assumptions) about the properties of objects and backgrounds. Inspired by the effectiveness of recently developed feature learning, we propose a novel Deep Image Saliency Computing (DISC) framework for fine-grained image saliency computing. In particular, we model image saliency from both coarse- and fine-level observations, and utilize a deep convolutional neural network (CNN) to learn the saliency representation in a progressive manner. Specifically, our saliency model is built upon two stacked CNNs. The first CNN generates a coarse-level saliency map by taking the overall image as input, roughly identifying salient regions in the global context. Furthermore, we integrate superpixel-based local context information in the first CNN to refine the coarse-level saliency map. Guided by the coarse saliency map, the second CNN focuses on the local context to produce a fine-grained and accurate saliency map while preserving object details. For a test image, the two CNNs collaboratively conduct the saliency computation in one shot. Our DISC framework is capable of uniformly highlighting the objects of interest against complex backgrounds while preserving object details well. Extensive experiments on several standard benchmarks show that DISC outperforms other state-of-the-art methods and also generalizes well across datasets without additional training. The executable version of DISC is available online: http://vision.sysu.edu.cn/projects/DISC. Comment: This manuscript is the accepted version for IEEE Transactions on Neural Networks and Learning Systems (T-NNLS), 201
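
    A minimal PyTorch sketch of the coarse-to-fine idea follows: one CNN maps the whole image to a coarse saliency map, and a second CNN refines it using the image concatenated with that map. The layer sizes and the concatenation scheme are illustrative assumptions; this is not the DISC architecture and it omits the superpixel-based refinement.

    # Sketch of a coarse-to-fine saliency pipeline with two stacked CNNs
    # (illustrative layer sizes only; not the DISC architecture).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CoarseNet(nn.Module):
        """Takes the whole RGB image, outputs a coarse saliency map."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1),
            )
        def forward(self, img):
            coarse = torch.sigmoid(self.features(img))
            # upsample back to the input resolution
            return F.interpolate(coarse, size=img.shape[-2:], mode="bilinear",
                                 align_corners=False)

    class FineNet(nn.Module):
        """Takes the image concatenated with the coarse map, outputs a fine saliency map."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),
            )
        def forward(self, img, coarse):
            return torch.sigmoid(self.features(torch.cat([img, coarse], dim=1)))

    coarse_net, fine_net = CoarseNet(), FineNet()
    img = torch.rand(1, 3, 224, 224)            # dummy input image
    coarse_map = coarse_net(img)                # global, coarse localization
    fine_map = fine_net(img, coarse_map)        # local refinement guided by the coarse map
    print(coarse_map.shape, fine_map.shape)     # both (1, 1, 224, 224)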

    MoT: Memory-of-Thought Enables ChatGPT to Self-Improve

    Large Language Models (LLMs) have shown impressive abilities on various tasks. However, fundamentally improving them depends on high-quality datasets or computationally expensive fine-tuning. In contrast, humans can easily improve themselves by self-thinking and memory, without external resources. In this paper, we propose a framework, MoT, to let the LLM self-improve through Memory-of-Thought, without annotated datasets or parameter updates. Specifically, MoT is divided into two stages: (1) before the test stage, the LLM pre-thinks on the unlabeled dataset and saves the high-confidence thoughts as external memory; (2) during the test stage, given a test question, the LLM recalls relevant memory to help itself reason and answer it. Experimental results show that MoT can help ChatGPT significantly improve its abilities in arithmetic reasoning, commonsense reasoning, factual reasoning, and natural language inference. Further analyses show that each component contributes critically to the improvements and that MoT leads to consistent improvements across various CoT methods and LLMs. Comment: Accepted to appear at EMNLP 202
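
    A rough sketch of the two MoT stages under stated assumptions: `generate` is a hypothetical stand-in for the LLM API, confidence is approximated by majority vote over sampled answers, and memory recall uses TF-IDF similarity. None of these choices are claimed to be the paper's exact procedure.

    # Sketch of Memory-of-Thought: (1) pre-think on unlabeled questions and keep
    # high-confidence thoughts as memory; (2) at test time, recall relevant memory
    # as demonstrations.  `generate` is a placeholder for an LLM call.
    from collections import Counter
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def generate(prompt, n_samples=1):
        """Placeholder for an LLM API call; returns n_samples (reasoning, answer) pairs."""
        return [("some chain of thought", "42")] * n_samples

    def build_memory(unlabeled_questions, n_samples=5, min_agreement=0.8):
        memory = []
        for q in unlabeled_questions:
            samples = generate(f"Q: {q}\nLet's think step by step.", n_samples)
            top_answer, count = Counter(a for _, a in samples).most_common(1)[0]
            if count / n_samples >= min_agreement:        # keep only high-confidence thoughts
                thought = next(r for r, a in samples if a == top_answer)
                memory.append({"question": q, "thought": thought, "answer": top_answer})
        return memory

    def answer_with_memory(test_question, memory, k=4):
        corpus = [m["question"] for m in memory]
        vec = TfidfVectorizer().fit(corpus + [test_question])
        sims = cosine_similarity(vec.transform([test_question]), vec.transform(corpus))[0]
        recalled = [memory[i] for i in sims.argsort()[::-1][:k]]      # most similar memories
        demos = "\n\n".join(f"Q: {m['question']}\nA: {m['thought']} So the answer is {m['answer']}."
                            for m in recalled)
        return generate(f"{demos}\n\nQ: {test_question}\nA:")[0]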

    Fault Diagnosis for Substation with Redundant Protection Configuration Based on Time-Sequence Fuzzy Petri-Net

    To address the timing inconsistencies, dual protection configuration, and uncertain diagnosis results that characterize 750 kV substations, a fault diagnosis method for substations with redundant protection configuration, based on time-sequence fuzzy Petri nets, is proposed. In this method, redundant knowledge about the faulted component is represented using two sets of protection information. On that basis, a redundant component diagnosis model based on time-sequence fuzzy Petri nets is constructed, which can be decomposed into a main subnet model and a redundant subnet model. In this model, the credibility of the initial information is determined using information entropy, the timing constraints are checked, and the credibility of the initial information is corrected using the relationship between the operated protections and the breakers. Compared with fuzzy Petri net diagnosis methods that take no account of timing constraints, this method can not only identify malfunction information but also obtain a definite diagnosis result.
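
    To make the reasoning step concrete, the sketch below shows one standard max-min style fuzzy Petri net inference pass on a tiny, invented protection-and-breaker net; the 750 kV model, the timing-constraint check, and the information-entropy correction from the paper are not reproduced here.

    # Sketch of basic fuzzy Petri net reasoning: each transition fires with the
    # minimum credibility of its input places times its certainty factor, and an
    # output place keeps the maximum credibility seen so far.  The tiny net below
    # is an invented illustration, not the paper's diagnosis model.
    import numpy as np

    # places: 0 main protection acted, 1 backup protection acted, 2 breaker tripped,
    #         3 component faulted
    theta = np.array([0.9, 0.8, 0.95, 0.0])     # initial credibility of each place

    # transitions: (input places, output place, certainty factor mu)
    transitions = [((0, 2), 3, 0.95),           # main protection + breaker -> fault
                   ((1, 2), 3, 0.80)]           # backup protection + breaker -> fault

    def reason(theta, transitions, max_iter=10):
        theta = theta.copy()
        for _ in range(max_iter):
            new = theta.copy()
            for inputs, out, mu in transitions:
                firing = min(theta[i] for i in inputs) * mu
                new[out] = max(new[out], firing)
            if np.allclose(new, theta):         # stop when the marking stabilizes
                break
            theta = new
        return theta

    print(reason(theta, transitions))   # credibility of "component faulted" is 0.855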

    Finding Support Examples for In-Context Learning

    The strong dependency among in-context examples makes selecting them an NP-hard combinatorial optimization problem, and enumerating all permutations is infeasible. Hence we propose LENS, a fiLter-thEN-Search method, to tackle this challenge in two stages. First, we filter the dataset to obtain individually informative in-context examples. Specifically, we propose a novel metric, InfoScore, to evaluate an example's in-context informativeness based on the language model's feedback, and further propose a progressive filtering process to remove uninformative examples. Then we propose a diversity-guided example search that iteratively refines and evaluates the selected example permutations to find examples that fully depict the task. The experimental results show that LENS significantly outperforms a wide range of baselines. Comment: Accepted to the Findings of EMNLP 202
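
    A simplified sketch of the filter-then-search pipeline, assuming a hypothetical `lm_score` function in place of the language model's feedback: candidates are progressively filtered by score, then a simple swap-based search picks a final set. Both the scoring and the search are placeholders, not the paper's InfoScore or its diversity-guided search.

    # Sketch: progressively filter candidates by a stand-in informativeness score,
    # then search among the survivors for a diverse support set.
    import random

    def lm_score(example, context):
        """Placeholder: how much conditioning on `example` helps the LM on `context`."""
        return random.random()

    def progressive_filter(candidates, probe_set, keep_ratio=0.5, rounds=3):
        pool = list(candidates)
        for _ in range(rounds):
            scored = sorted(pool, key=lambda ex: lm_score(ex, probe_set), reverse=True)
            pool = scored[: max(1, int(len(scored) * keep_ratio))]
        return pool

    def diversity_guided_search(pool, k=4, iters=50):
        def utility(examples):
            # crude stand-in: summed feedback plus a bonus for distinct examples
            return sum(lm_score(ex, examples) for ex in examples) + len(set(examples))
        best = random.sample(pool, k)
        best_score = utility(best)
        for _ in range(iters):
            cand = best.copy()
            cand[random.randrange(k)] = random.choice(pool)   # swap one example
            score = utility(cand)
            if score > best_score:
                best, best_score = cand, score
        return best

    candidates = [f"example {i}" for i in range(100)]
    support = diversity_guided_search(progressive_filter(candidates, probe_set=candidates[:8]))
    print(support)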

    Matroidal approaches to rough sets via closure operators

    This paper studies rough sets from the operator-oriented view via matroidal approaches. We first investigate several kinds of closure operators and conclude that the Pawlak upper approximation operator is both a topological and a matroidal closure operator. We then characterize the Pawlak upper approximation operator in terms of the closure operator in Pawlak matroids, which are first defined in this paper and are generalized to fundamental matroids when partitions are generalized to coverings. A new covering-based rough set model is then proposed based on fundamental matroids, and properties of this model are studied. Lastly, we turn to the abstract approximation space, whose original definition is modified to obtain a one-to-one correspondence between closure systems (operators) and concrete models of abstract approximation spaces. We finally examine the relations among four kinds of abstract approximation spaces, which correspond exactly to the relations among the closure systems.
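
    As a small concrete illustration of the first claim, the sketch below builds the Pawlak upper approximation from a toy partition and checks that it is extensive, monotone, idempotent, and union-preserving, i.e. a topological closure operator; the universe and partition are invented for illustration.

    # Pawlak approximations induced by a partition, with checks of the
    # closure-operator axioms on a toy universe.
    from itertools import chain, combinations

    U = {1, 2, 3, 4, 5, 6}
    partition = [{1, 2}, {3, 4, 5}, {6}]      # equivalence classes of an indiscernibility relation

    def upper(X):
        """Pawlak upper approximation: union of the classes that meet X."""
        return set().union(*[B for B in partition if B & X])

    def lower(X):
        """Pawlak lower approximation: union of the classes contained in X."""
        return set().union(*[B for B in partition if B <= X])

    subsets = [set(s) for s in chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]
    assert all(X <= upper(X) for X in subsets)                                    # extensive
    assert all(upper(upper(X)) == upper(X) for X in subsets)                      # idempotent
    assert all(upper(X) <= upper(Y) for X in subsets for Y in subsets if X <= Y)  # monotone
    assert all(upper(X | Y) == upper(X) | upper(Y)
               for X in subsets for Y in subsets)                                 # topological
    print(lower({1, 2, 3}), upper({1, 2, 3}))   # {1, 2} and {1, 2, 3, 4, 5}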

    The Application and Revelation of Joseph Nye’s Soft Power Theory

    Since Joseph Nye proposed the theory of soft power, it has attracted a great deal of attention. However, research on the sources of the theory is limited. This paper introduces the soft power theory, the sources of the theory, an analysis of how Joseph Nye's soft power theory has been applied in different countries, and its revelation. Key words: The soft power theory; The source of the theory; Application analysis; Revelation